Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first project using GANs, we want you to test your neural network on MNIST before moving on to CelebA. Running the GAN on MNIST will let you see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [18]:
data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change the number of examples displayed by changing show_n_images.

In [19]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[19]:
<matplotlib.image.AxesImage at 0x7fa978f40710>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change the number of examples displayed by changing show_n_images.

In [20]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Out[20]:
<matplotlib.image.AxesImage at 0x7fa97b67ac50>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. The pixel values of the MNIST and CelebA datasets will be scaled to the range -0.5 to 0.5, and every image will be 28x28. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.

The MNIST images are black-and-white images with a single color channel, while the CelebA images have three color channels (RGB).
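
Note that the generator you'll build later ends in a tanh, whose output lies in the range -1 to 1, while the preprocessed batches arrive in the range -0.5 to 0.5. Here is a minimal sketch of the rescaling that reconciles the two; the batch values below are simulated (helper.py isn't shown here), and the train function later in this notebook performs the same step with batch_images * 2.0.

import numpy as np

# Simulated preprocessed batch in [-0.5, 0.5] (stand-in for what the helper module provides)
batch_images = np.random.uniform(-0.5, 0.5, size=(32, 28, 28, 3)).astype(np.float32)

# Rescale to [-1, 1] so real images match the range of the generator's tanh output
in_images = batch_images * 2.0
assert -1.0 <= in_images.min() and in_images.max() <= 1.0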

Build the Neural Network

You'll build the components necessary for a GAN by implementing the following functions below:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

In [21]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.0.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).

In [22]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    
    # create input tensor for the real image of dimension W x H x C
    input_real = tf.placeholder(tf.float32, 
                                [None, image_width, image_height, image_channels],
                                name = 'input_real')
    
    # create input tensor for the latent vector z of dimension z_dim
    input_fake = tf.placeholder(tf.float32, 
                                [None, z_dim],
                                name = 'input_z')
    
    # learning rate placeholder
    learn_rate = tf.placeholder(tf.float32, 
                                None,
                                name = 'learning_rate')

    return input_real, input_fake, learn_rate

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

tests.test_model_inputs(model_inputs)
Tests Passed
In [23]:
def relu(node, alpha):
    '''
    Apply a leaky rectified linear unit (leaky ReLU) to the node
    :param node: Tensor to pass through the linear unit
    :param alpha: slope applied to values < 0
    :return: rectified tensor
    '''
    return tf.maximum(node * alpha, node)


def rn_initializer():
    '''
    Random normal initializer
    '''
    return tf.random_normal_initializer(mean = 0.0, stddev = 0.001)


def xev_init():
    '''
    Xavier initializer
    '''
    return tf.contrib.layers.xavier_initializer(uniform = False)
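
As a purely illustrative sanity check of the leaky ReLU helper above (not part of the project): for 0 < alpha < 1, tf.maximum(node * alpha, node) leaves non-negative values unchanged and scales negative values by alpha.

# Illustrative only: evaluate the leaky ReLU helper on a few values
# (uses the relu() defined above and the tf module imported earlier)
with tf.Graph().as_default(), tf.Session() as sess:
    sample = tf.constant([-2.0, 0.0, 3.0])
    print(sess.run(relu(sample, 0.1)))   # expected: [-0.2  0.  3.]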

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).

In [24]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # build the discriminator network
    # input image -> convolution -> convolution -> convolution -> sigmoid -> real/fake
    # limit the scope of all trainable variables with 'discriminator'
    with tf.variable_scope('discriminator', reuse = reuse):
        
        # convolution layer parameters
        padding     = 'same'
        conv_stride = 2
        filter_size = 32
        kernel_size = 3
        alpha       = 0.1
        drop_rate   = 0.8
        
        # Input layer convolution [28*28*Depth --> 14*14*32]
        conv1  = tf.layers.conv2d(images, filter_size, kernel_size, 
                                 strides = conv_stride, padding = padding,
                                 kernel_initializer = rn_initializer())
        conv1 = relu(conv1, alpha)
        
        # Second convolution layer [14*14*32 --> 7*7*64]
        conv2 = tf.layers.conv2d(conv1, filter_size * 2, kernel_size, 
                                 strides = conv_stride, padding = padding,
                                 kernel_initializer = rn_initializer())
        
        conv2 = tf.layers.batch_normalization(conv2, training=True)
        conv2 = relu(conv2, alpha)
        conv2 = tf.layers.dropout(conv2, rate = drop_rate)
        
        # 3rd convolution layer [7*7*64 --> 4*4*128]
        conv3 = tf.layers.conv2d(conv2, filter_size * 4, kernel_size, 
                                 strides= conv_stride, padding= padding,
                                 kernel_initializer = rn_initializer())
        
        conv3 = tf.layers.batch_normalization(conv3, training=True)
        conv3 = relu(conv3, alpha)
        conv3 = tf.layers.dropout(conv3, rate = drop_rate)
        
        # transform the 3-D feature map into a 1-D tensor
        #[batch_size, height, width, depth] = conv3.get_shape().as_list()
        #flat_layer = tf.reshape(conv3, [-1, height * width * depth])
        flat_layer = tf.contrib.layers.flatten(conv3)
                
        # fully connected layer
        #weight = tf.get_variable('weight', [height * width * depth, 1], 
        #                         initializer = xev_init())
            
        #bias   = tf.get_variable('bias', [1],
        #                         initializer = tf.constant_initializer(0))
                       
        #logits = tf.matmul(flat_layer, weight) + bias
        #out    = tf.sigmoid(logits)
        
        logits = tf.layers.dense(flat_layer, 1,
                                 kernel_initializer = rn_initializer())
        
        output  = tf.sigmoid(logits)
        
        return output, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [25]:
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    
    # build the generator network
    # z sample -> fully connected layer -> 4 transposed convolutions -> tanh -> output image
    # limit the scope of all trainable variables with 'generator'
    with tf.variable_scope('generator', reuse = not is_train):
        
        # convolution layer parameters
        padding     = 'same'
        conv_stride = 2
        filter_size = 128
        kernel_size = 3
        alpha       = 0.1
        drop_rate   = 0.8
        
        # First layer: fully connected layer 7*7*512
        width   = 7
        height  = 7
        depth   = 512
        n_units = width * height * depth 
        shape   = z.get_shape().as_list()
        
        #weight = tf.get_variable('weight', [shape[-1], n_units], 
        #                          initializer = tf.random_normal_initializer(mean = 0.0, stddev = 0.1))
        #bias   = tf.get_variable('bias', [n_units], 
        #                         initializer = tf.constant_initializer(0))
        #first_layer     = tf.matmul(z, weight) + bias
        
        first_layer = tf.layers.dense(z, n_units)
        
        # reshape 1-D tensor to 3-D tensor for convolution: 7*7*512
        first_layer = tf.reshape(first_layer, [-1, width, height, depth])
        first_layer = tf.layers.batch_normalization(first_layer, training = is_train)
        first_layer = relu(first_layer, alpha)
    
        # Second layer: transposed convolution layer to transform the resolution 
        # from 7*7*512-> 7*7*128
        conv1 = tf.layers.conv2d_transpose(first_layer, filter_size, kernel_size, 
                                           strides = 1, padding = padding,
                                           kernel_initializer = rn_initializer())
              
        conv1 = tf.layers.batch_normalization(conv1, training= is_train)
        conv1 = relu(conv1, alpha)
        conv1 = tf.layers.dropout(conv1, rate = drop_rate)
        
        # Third layer: transposed convolution layer to transform the resolution 
        # from 7*7*128 to 14*14*64
        conv2 = tf.layers.conv2d_transpose(conv1, int(filter_size/2), kernel_size, 
                                           strides = conv_stride, padding = padding,
                                           kernel_initializer = rn_initializer())
        
        conv2 = tf.layers.batch_normalization(conv2, training= is_train)
        conv2 = relu(conv2, alpha)
        conv2 = tf.layers.dropout(conv2, rate = drop_rate)
                    
        # Fourth layer: transposed convolution layer to transform the resolution
        # from 14*14*64 to 14*14*32
        conv3 = tf.layers.conv2d_transpose(conv2, int(filter_size/4), kernel_size, 
                                           strides = 1, padding = padding,
                                           kernel_initializer = rn_initializer())
        
        conv3 = tf.layers.batch_normalization(conv3, training= is_train)
        conv3 = relu(conv3, alpha)
                
        # Fifth layer: transposed convolution layer to transform the resolution
        # from 14*14*32 to 28*28*Channels
        conv4 = tf.layers.conv2d_transpose(conv3, out_channel_dim, kernel_size, 
                                           strides = conv_stride, padding = padding,
                                           kernel_initializer = rn_initializer())
        # Pass it through tanh
        output = tf.tanh(conv4)
               
        return output

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)
In [26]:
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    
    # label smoothing
    smooth = 0.1
    
    # Generator network output
    g_model = generator(input_z, out_channel_dim, is_train = True)
    
    # Discriminator network output for real images
    d_model_real, d_logits_real = discriminator(input_real, reuse=False)
    
    # Discriminator network output for fake image
    d_model_fake, d_logits_fake = discriminator(g_model, reuse = True)
    
    # Calculate network loss
    real_labels = tf.ones_like(d_logits_real) * (1 - smooth)
    zero_labels = tf.zeros_like(d_logits_fake) 
    
    # get loss for real and fake image classification from discriminator
    d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = d_logits_real, 
                                                                         labels = real_labels))
    
    d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = d_logits_fake, 
                                                                         labels = zero_labels))
    
    # add real and fake image losses to find the total loss for discriminator
    d_loss = d_loss_real + d_loss_fake
    
    # get the loss for generator for fake image
    g_labels = tf.ones_like(d_logits_fake)
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = d_logits_fake, 
                                                                    labels = g_labels))
    return d_loss, g_loss

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Tests Passed

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables, and filter them by the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [27]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    
    # separate generator and discriminator variables
    t_vars = tf.trainable_variables()
    g_vars = [var for var in t_vars if 'generator' in var.name]
    d_vars = [var for var in t_vars if 'discriminator' in var.name]
    
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    
    # ensure the batch normalization statistics are updated before each optimizer step
    with tf.control_dependencies(update_ops):
        
        # get optimizers
        d_optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate, beta1 = beta1)
        d_train_opt = d_optimizer.minimize(d_loss, var_list = d_vars)
        
        g_optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate, beta1 = beta1)
        g_train_opt = g_optimizer.minimize(g_loss, var_list = g_vars)
        
        return d_train_opt, g_train_opt

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [28]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GANs. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show the generator's output while you train. Running show_generator_output for every batch would drastically increase training time and the size of the notebook, so it's recommended to print the generator output every 100 batches.

In [29]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # Build Model
    show_n_images = 20
    
    # get input placeholders
    _, width, height, channels      = data_shape
    input_real, input_z, learn_rate = model_inputs(width, height, channels, z_dim)
    
    # get network losses and optimizers
    d_loss, g_loss           = model_loss(input_real, input_z, channels)
    d_train_opt, g_train_opt = model_opt(d_loss, g_loss, learn_rate, beta1)
            
    train_steps = 0
    print_steps = 50
    show_steps  = 100
    
    # start training
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            train_steps = 0
            for batch_images in get_batches(batch_size):
                
                train_steps += 1
                
                # rescale the image range from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
                in_image = batch_images * 2.0
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))

                # Run discriminator and generator optimizers
                _ = sess.run(d_train_opt, feed_dict={input_real : in_image, 
                                                     input_z    : batch_z,
                                                     learn_rate : learning_rate
                                                    })
                
                _ = sess.run(g_train_opt, feed_dict={input_z    : batch_z,
                                                     input_real : in_image,
                                                     learn_rate : learning_rate,
                                                    })
                
                if train_steps % print_steps == 0:
                    # print the current losses; note that the second number in the
                    # "Epoch x/y" label below is the batch step within the epoch
                    train_loss_d = d_loss.eval({input_real: in_image,
                                                input_z   : batch_z})

                    train_loss_g = g_loss.eval({input_z: batch_z})

                    print("Epoch {}/{}. ".format(epoch_i+1, train_steps),
                          "Discriminator Loss: {:.4f}...".format(train_loss_d),
                          "Generator Loss: {:.4f}".format(train_loss_g))

                    if train_steps % show_steps == 0:
                        show_generator_output(sess, show_n_images, input_z, channels, data_image_mode)
    
    print('Trained model.')

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.

In [31]:
batch_size    = 32
z_dim         = 128
learning_rate = 0.0003
beta1         = 0.5

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)
Epoch 1/50.  Discriminator Loss: 0.3476... Generator Loss: 4.0179
Epoch 1/100.  Discriminator Loss: 3.8149... Generator Loss: 0.0365
Epoch 1/150.  Discriminator Loss: 1.5518... Generator Loss: 0.8506
Epoch 1/200.  Discriminator Loss: 0.8324... Generator Loss: 4.3784
Epoch 1/250.  Discriminator Loss: 0.7249... Generator Loss: 2.1129
Epoch 1/300.  Discriminator Loss: 1.1157... Generator Loss: 0.6413
Epoch 1/350.  Discriminator Loss: 1.0654... Generator Loss: 3.1504
Epoch 1/400.  Discriminator Loss: 1.2435... Generator Loss: 0.6809
Epoch 1/450.  Discriminator Loss: 1.0449... Generator Loss: 0.7908
Epoch 1/500.  Discriminator Loss: 1.0718... Generator Loss: 0.9059
Epoch 1/550.  Discriminator Loss: 1.0952... Generator Loss: 1.0309
Epoch 1/600.  Discriminator Loss: 1.4008... Generator Loss: 0.5076
Epoch 1/650.  Discriminator Loss: 1.2522... Generator Loss: 0.8160
Epoch 1/700.  Discriminator Loss: 1.1320... Generator Loss: 0.8754
Epoch 1/750.  Discriminator Loss: 1.2040... Generator Loss: 1.0087
Epoch 1/800.  Discriminator Loss: 1.4450... Generator Loss: 0.4759
Epoch 1/850.  Discriminator Loss: 1.4425... Generator Loss: 0.6102
Epoch 1/900.  Discriminator Loss: 1.7560... Generator Loss: 0.3077
Epoch 1/950.  Discriminator Loss: 1.3243... Generator Loss: 0.9109
Epoch 1/1000.  Discriminator Loss: 1.3056... Generator Loss: 0.7197
Epoch 1/1050.  Discriminator Loss: 1.1422... Generator Loss: 0.7989
Epoch 1/1100.  Discriminator Loss: 1.2374... Generator Loss: 0.9031
Epoch 1/1150.  Discriminator Loss: 1.3505... Generator Loss: 0.5173
Epoch 1/1200.  Discriminator Loss: 1.3469... Generator Loss: 0.7711
Epoch 1/1250.  Discriminator Loss: 1.3120... Generator Loss: 0.6860
Epoch 1/1300.  Discriminator Loss: 1.2118... Generator Loss: 0.9967
Epoch 1/1350.  Discriminator Loss: 1.3549... Generator Loss: 0.6929
Epoch 1/1400.  Discriminator Loss: 1.2492... Generator Loss: 0.7172
Epoch 1/1450.  Discriminator Loss: 1.3067... Generator Loss: 0.6392
Epoch 1/1500.  Discriminator Loss: 1.2781... Generator Loss: 0.7772
Epoch 1/1550.  Discriminator Loss: 1.2848... Generator Loss: 0.7806
Epoch 1/1600.  Discriminator Loss: 1.2736... Generator Loss: 1.0927
Epoch 1/1650.  Discriminator Loss: 1.1356... Generator Loss: 1.1793
Epoch 1/1700.  Discriminator Loss: 1.2093... Generator Loss: 0.8338
Epoch 1/1750.  Discriminator Loss: 1.3027... Generator Loss: 0.6045
Epoch 1/1800.  Discriminator Loss: 1.1845... Generator Loss: 1.1058
Epoch 1/1850.  Discriminator Loss: 1.1585... Generator Loss: 1.0124
Epoch 2/50.  Discriminator Loss: 1.4235... Generator Loss: 0.5403
Epoch 2/100.  Discriminator Loss: 1.2514... Generator Loss: 1.0322
Epoch 2/150.  Discriminator Loss: 1.1737... Generator Loss: 0.9346
Epoch 2/200.  Discriminator Loss: 1.1723... Generator Loss: 0.8557
Epoch 2/250.  Discriminator Loss: 1.1605... Generator Loss: 0.8579
Epoch 2/300.  Discriminator Loss: 1.2419... Generator Loss: 0.8198
Epoch 2/350.  Discriminator Loss: 1.1379... Generator Loss: 0.8179
Epoch 2/400.  Discriminator Loss: 1.2801... Generator Loss: 0.9463
Epoch 2/450.  Discriminator Loss: 1.2281... Generator Loss: 0.7369
Epoch 2/500.  Discriminator Loss: 1.0696... Generator Loss: 1.1275
Epoch 2/550.  Discriminator Loss: 1.0060... Generator Loss: 1.0197
Epoch 2/600.  Discriminator Loss: 1.3412... Generator Loss: 0.6630
Epoch 2/650.  Discriminator Loss: 1.1263... Generator Loss: 1.1217
Epoch 2/700.  Discriminator Loss: 1.0655... Generator Loss: 1.1032
Epoch 2/750.  Discriminator Loss: 1.1261... Generator Loss: 0.8699
Epoch 2/800.  Discriminator Loss: 1.2230... Generator Loss: 0.8757
Epoch 2/850.  Discriminator Loss: 1.3727... Generator Loss: 0.5483
Epoch 2/900.  Discriminator Loss: 1.1811... Generator Loss: 0.7576
Epoch 2/950.  Discriminator Loss: 0.9980... Generator Loss: 1.0061
Epoch 2/1000.  Discriminator Loss: 1.1236... Generator Loss: 0.8815
Epoch 2/1050.  Discriminator Loss: 0.9814... Generator Loss: 1.0295
Epoch 2/1100.  Discriminator Loss: 1.1174... Generator Loss: 0.8800
Epoch 2/1150.  Discriminator Loss: 1.0964... Generator Loss: 0.9335
Epoch 2/1200.  Discriminator Loss: 1.1812... Generator Loss: 0.8725
Epoch 2/1250.  Discriminator Loss: 1.1578... Generator Loss: 0.8266
Epoch 2/1300.  Discriminator Loss: 1.0334... Generator Loss: 1.0254
Epoch 2/1350.  Discriminator Loss: 1.2646... Generator Loss: 0.7515
Epoch 2/1400.  Discriminator Loss: 1.1099... Generator Loss: 0.8768
Epoch 2/1450.  Discriminator Loss: 1.1715... Generator Loss: 0.6648
Epoch 2/1500.  Discriminator Loss: 1.3034... Generator Loss: 0.6464
Epoch 2/1550.  Discriminator Loss: 1.1405... Generator Loss: 0.9331
Epoch 2/1600.  Discriminator Loss: 1.0351... Generator Loss: 1.0597
Epoch 2/1650.  Discriminator Loss: 1.0235... Generator Loss: 1.2384
Epoch 2/1700.  Discriminator Loss: 1.1853... Generator Loss: 1.3099
Epoch 2/1750.  Discriminator Loss: 0.9694... Generator Loss: 1.5403
Epoch 2/1800.  Discriminator Loss: 1.0691... Generator Loss: 0.8523
Epoch 2/1850.  Discriminator Loss: 1.1217... Generator Loss: 0.8024
Trained model.

CelebA

Run your GAN on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.

In [17]:
batch_size    = 32
z_dim         = 128
learning_rate = 0.0003
beta1         = 0.5


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)
Epoch 1/50.  Discriminator Loss: 1.0054... Generator Loss: 0.7081
Epoch 1/100.  Discriminator Loss: 0.3574... Generator Loss: 3.5733
Epoch 1/150.  Discriminator Loss: 1.0185... Generator Loss: 1.0293
Epoch 1/200.  Discriminator Loss: 1.3213... Generator Loss: 0.7950
Epoch 1/250.  Discriminator Loss: 1.2827... Generator Loss: 0.8698
Epoch 1/300.  Discriminator Loss: 1.2505... Generator Loss: 0.8053
Epoch 1/350.  Discriminator Loss: 1.1537... Generator Loss: 0.9550
Epoch 1/400.  Discriminator Loss: 1.0359... Generator Loss: 1.1204
Epoch 1/450.  Discriminator Loss: 1.2278... Generator Loss: 0.8145
Epoch 1/500.  Discriminator Loss: 1.2517... Generator Loss: 0.7498
Epoch 1/550.  Discriminator Loss: 1.1651... Generator Loss: 0.9019
Epoch 1/600.  Discriminator Loss: 1.3135... Generator Loss: 0.8155
Epoch 1/650.  Discriminator Loss: 1.1055... Generator Loss: 1.3418
Epoch 1/700.  Discriminator Loss: 1.4757... Generator Loss: 0.5328
Epoch 1/750.  Discriminator Loss: 1.1693... Generator Loss: 0.9148
Epoch 1/800.  Discriminator Loss: 1.3177... Generator Loss: 0.8198
Epoch 1/850.  Discriminator Loss: 1.4478... Generator Loss: 0.5691
Epoch 1/900.  Discriminator Loss: 1.4116... Generator Loss: 0.6426
Epoch 1/950.  Discriminator Loss: 1.3178... Generator Loss: 0.7421
Epoch 1/1000.  Discriminator Loss: 1.2667... Generator Loss: 0.8576
Epoch 1/1050.  Discriminator Loss: 1.3004... Generator Loss: 0.7615
Epoch 1/1100.  Discriminator Loss: 1.0262... Generator Loss: 0.9758
Epoch 1/1150.  Discriminator Loss: 1.4172... Generator Loss: 0.6309
Epoch 1/1200.  Discriminator Loss: 1.3226... Generator Loss: 0.9386
Epoch 1/1250.  Discriminator Loss: 1.2146... Generator Loss: 0.9217
Epoch 1/1300.  Discriminator Loss: 1.2343... Generator Loss: 1.0151
Epoch 1/1350.  Discriminator Loss: 1.1520... Generator Loss: 0.9644
Epoch 1/1400.  Discriminator Loss: 1.2559... Generator Loss: 0.8948
Epoch 1/1450.  Discriminator Loss: 1.4374... Generator Loss: 0.7563
Epoch 1/1500.  Discriminator Loss: 1.3269... Generator Loss: 0.6999
Epoch 1/1550.  Discriminator Loss: 1.3089... Generator Loss: 0.7430
Epoch 1/1600.  Discriminator Loss: 1.1849... Generator Loss: 0.7762
Epoch 1/1650.  Discriminator Loss: 1.2224... Generator Loss: 0.8394
Epoch 1/1700.  Discriminator Loss: 1.2179... Generator Loss: 0.9497
Epoch 1/1750.  Discriminator Loss: 1.1790... Generator Loss: 0.9123
Epoch 1/1800.  Discriminator Loss: 1.1991... Generator Loss: 0.9171
Epoch 1/1850.  Discriminator Loss: 1.3151... Generator Loss: 0.8144
Epoch 1/1900.  Discriminator Loss: 1.3420... Generator Loss: 0.7756
Epoch 1/1950.  Discriminator Loss: 1.2046... Generator Loss: 0.8506
Epoch 1/2000.  Discriminator Loss: 1.3548... Generator Loss: 0.6435
Epoch 1/2050.  Discriminator Loss: 1.2700... Generator Loss: 0.9075
Epoch 1/2100.  Discriminator Loss: 1.3446... Generator Loss: 0.7305
Epoch 1/2150.  Discriminator Loss: 1.5166... Generator Loss: 0.6842
Epoch 1/2200.  Discriminator Loss: 1.5456... Generator Loss: 0.5571
Epoch 1/2250.  Discriminator Loss: 1.2921... Generator Loss: 0.7185
Epoch 1/2300.  Discriminator Loss: 1.3208... Generator Loss: 0.8189
Epoch 1/2350.  Discriminator Loss: 1.3109... Generator Loss: 0.8901
Epoch 1/2400.  Discriminator Loss: 1.2479... Generator Loss: 0.9525
Epoch 1/2450.  Discriminator Loss: 1.2534... Generator Loss: 0.7791
Epoch 1/2500.  Discriminator Loss: 1.1267... Generator Loss: 1.0032
Epoch 1/2550.  Discriminator Loss: 1.2285... Generator Loss: 0.7897
Epoch 1/2600.  Discriminator Loss: 1.5449... Generator Loss: 0.6078
Epoch 1/2650.  Discriminator Loss: 1.3422... Generator Loss: 0.6794
Epoch 1/2700.  Discriminator Loss: 1.4083... Generator Loss: 0.6647
Epoch 1/2750.  Discriminator Loss: 1.3873... Generator Loss: 0.7163
Epoch 1/2800.  Discriminator Loss: 1.0638... Generator Loss: 1.0466
Epoch 1/2850.  Discriminator Loss: 1.1640... Generator Loss: 0.9370
Epoch 1/2900.  Discriminator Loss: 1.3773... Generator Loss: 0.7878
Epoch 1/2950.  Discriminator Loss: 1.3864... Generator Loss: 0.8176
Epoch 1/3000.  Discriminator Loss: 1.6141... Generator Loss: 0.5315
Epoch 1/3050.  Discriminator Loss: 1.3569... Generator Loss: 0.8542
Epoch 1/3100.  Discriminator Loss: 1.3126... Generator Loss: 0.8074
Epoch 1/3150.  Discriminator Loss: 1.2922... Generator Loss: 0.7813
Epoch 1/3200.  Discriminator Loss: 1.5679... Generator Loss: 0.5881
Epoch 1/3250.  Discriminator Loss: 1.2545... Generator Loss: 0.7628
Epoch 1/3300.  Discriminator Loss: 1.2735... Generator Loss: 0.7494
Epoch 1/3350.  Discriminator Loss: 1.4753... Generator Loss: 0.6080
Epoch 1/3400.  Discriminator Loss: 1.1820... Generator Loss: 0.9246
Epoch 1/3450.  Discriminator Loss: 1.3006... Generator Loss: 0.9247
Epoch 1/3500.  Discriminator Loss: 1.4171... Generator Loss: 0.6474
Epoch 1/3550.  Discriminator Loss: 1.2867... Generator Loss: 0.7645
Epoch 1/3600.  Discriminator Loss: 1.3756... Generator Loss: 0.6343
Epoch 1/3650.  Discriminator Loss: 1.4549... Generator Loss: 0.7070
Epoch 1/3700.  Discriminator Loss: 1.5148... Generator Loss: 0.7164
Epoch 1/3750.  Discriminator Loss: 1.3975... Generator Loss: 0.6400
Epoch 1/3800.  Discriminator Loss: 1.4678... Generator Loss: 0.7297
Epoch 1/3850.  Discriminator Loss: 1.2004... Generator Loss: 0.8458
Epoch 1/3900.  Discriminator Loss: 1.2887... Generator Loss: 0.7988
Epoch 1/3950.  Discriminator Loss: 1.3172... Generator Loss: 0.7568
Epoch 1/4000.  Discriminator Loss: 1.5088... Generator Loss: 0.6250
Epoch 1/4050.  Discriminator Loss: 1.4718... Generator Loss: 0.5982
Epoch 1/4100.  Discriminator Loss: 1.3972... Generator Loss: 0.7728
Epoch 1/4150.  Discriminator Loss: 1.1972... Generator Loss: 0.7807
Epoch 1/4200.  Discriminator Loss: 1.4293... Generator Loss: 0.6609
Epoch 1/4250.  Discriminator Loss: 1.2944... Generator Loss: 0.7058
Epoch 1/4300.  Discriminator Loss: 1.4237... Generator Loss: 0.7032
Epoch 1/4350.  Discriminator Loss: 1.3127... Generator Loss: 0.7686
Epoch 1/4400.  Discriminator Loss: 1.1988... Generator Loss: 0.9163
Epoch 1/4450.  Discriminator Loss: 1.5041... Generator Loss: 0.5945
Epoch 1/4500.  Discriminator Loss: 1.4653... Generator Loss: 0.6637
Epoch 1/4550.  Discriminator Loss: 1.2055... Generator Loss: 0.8208
Epoch 1/4600.  Discriminator Loss: 1.2972... Generator Loss: 0.8338
Epoch 1/4650.  Discriminator Loss: 1.5080... Generator Loss: 0.5970
Epoch 1/4700.  Discriminator Loss: 1.4901... Generator Loss: 0.6255
Epoch 1/4750.  Discriminator Loss: 1.4698... Generator Loss: 0.7348
Epoch 1/4800.  Discriminator Loss: 1.4705... Generator Loss: 0.6724
Epoch 1/4850.  Discriminator Loss: 1.6141... Generator Loss: 0.5724
Epoch 1/4900.  Discriminator Loss: 1.4197... Generator Loss: 0.7079
Epoch 1/4950.  Discriminator Loss: 1.4879... Generator Loss: 0.6854
Epoch 1/5000.  Discriminator Loss: 1.2377... Generator Loss: 0.8197
Epoch 1/5050.  Discriminator Loss: 1.4238... Generator Loss: 0.7704
Epoch 1/5100.  Discriminator Loss: 1.4558... Generator Loss: 0.6489
Epoch 1/5150.  Discriminator Loss: 1.3615... Generator Loss: 0.7513
Epoch 1/5200.  Discriminator Loss: 1.6637... Generator Loss: 0.4981
Epoch 1/5250.  Discriminator Loss: 1.3292... Generator Loss: 0.6662
Epoch 1/5300.  Discriminator Loss: 1.4672... Generator Loss: 0.6090
Epoch 1/5350.  Discriminator Loss: 1.4148... Generator Loss: 0.6818
Epoch 1/5400.  Discriminator Loss: 1.4360... Generator Loss: 0.6932
Epoch 1/5450.  Discriminator Loss: 1.4350... Generator Loss: 0.6653
Epoch 1/5500.  Discriminator Loss: 1.3146... Generator Loss: 0.7422
Epoch 1/5550.  Discriminator Loss: 1.3889... Generator Loss: 0.6779
Epoch 1/5600.  Discriminator Loss: 1.3310... Generator Loss: 0.6554
Epoch 1/5650.  Discriminator Loss: 1.3108... Generator Loss: 0.6707
Epoch 1/5700.  Discriminator Loss: 1.2738... Generator Loss: 0.8438
Epoch 1/5750.  Discriminator Loss: 1.3157... Generator Loss: 0.7515
Epoch 1/5800.  Discriminator Loss: 1.3879... Generator Loss: 0.6970
Epoch 1/5850.  Discriminator Loss: 1.3688... Generator Loss: 0.7307
Epoch 1/5900.  Discriminator Loss: 1.5000... Generator Loss: 0.5437
Epoch 1/5950.  Discriminator Loss: 1.4491... Generator Loss: 0.6222
Epoch 1/6000.  Discriminator Loss: 1.3056... Generator Loss: 0.7283
Epoch 1/6050.  Discriminator Loss: 1.4092... Generator Loss: 0.7500
Epoch 1/6100.  Discriminator Loss: 1.4785... Generator Loss: 0.7029
Epoch 1/6150.  Discriminator Loss: 1.3905... Generator Loss: 0.6522
Epoch 1/6200.  Discriminator Loss: 1.3926... Generator Loss: 0.7169
Epoch 1/6250.  Discriminator Loss: 1.4923... Generator Loss: 0.6685
Epoch 1/6300.  Discriminator Loss: 1.3666... Generator Loss: 0.7537
Trained model.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
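
If you prefer to produce the HTML copy programmatically rather than through the "File" -> "Download as" menu, a sketch using nbconvert's Python API looks like the following (it assumes the standard nbformat and nbconvert packages that ship with Jupyter are installed).

# Sketch: export the notebook to HTML without using the menu
import nbformat
from nbconvert import HTMLExporter

notebook = nbformat.read('dlnd_face_generation.ipynb', as_version=4)
html_body, _ = HTMLExporter().from_notebook_node(notebook)

with open('dlnd_face_generation.html', 'w') as out_file:
    out_file.write(html_body)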